Statistical Biases in Backpropagation Learning

Author

  • Chris Thornton
Abstract

The paper investigates the statistical effects which may need to be exploited in supervised learning. It notes that these effects can be classified according to their conditionality and their order and proposes that learning algorithms will typically have some form of bias towards particular classes of effect. It presents the results of an empirical study of the statistical bias of backpropagation. The study involved applying the algorithm to a wide range of learning problems using a variety of different internal architectures. The results of the study revealed that backpropagation has a very specific bias in the general direction of statistical rather than relational effects. The paper shows how the existence of this bias effectively constitutes a weakness in the algorithm's ability to discount noise.
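
The statistical/relational distinction the paper draws can be made concrete with parity. The sketch below is my own toy illustration (not taken from the paper), using NumPy: in XOR each input considered on its own is uncorrelated with the target, so there is no first-order statistical effect for backpropagation to exploit, and anything a small network learns must come from the relation between the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parity (XOR) targets: the output depends only on the relation between the
# two inputs, so there is no first-order statistical effect to exploit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Each input taken alone is uncorrelated with the target.
for j in range(2):
    print(f"corr(x{j}, y) = {np.corrcoef(X[:, j], y[:, 0])[0, 1]:.2f}")

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimal 2-4-1 sigmoid network trained with plain batch backpropagation.
W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)
lr = 1.0

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backprop of squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

# Typically approaches 0, 1, 1, 0 once the relational structure is captured.
print("network outputs:", out.ravel().round(2))
```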

Similar articles

The Learning Dynamics of a Universal Approximator

The learning properties of a universal approximator, a normalized committee machine with adjustable biases, are studied for on-line back-propagation learning. Within a statistical mechanics framework, numerical studies show that this model has features which do not exist in previously studied two-layer network models without adjustable biases, e.g., attractive suboptimal symmetric phases even f...
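
One plausible reading of the model described above is a soft committee machine whose output averages the hidden-unit activations, each with its own adjustable bias; the tanh activation, the averaging normalization, and the names below are my assumptions rather than details given in the abstract.

```python
import numpy as np

def committee_output(x, W, b):
    """Average of the K hidden-unit activations g(w_k . x + b_k)."""
    return np.tanh(W @ x + b).mean()

rng = np.random.default_rng(1)
K, N = 3, 5                      # hidden units, input dimension (illustrative)
W = rng.normal(size=(K, N))      # hidden weight vectors
b = rng.normal(size=K)           # the adjustable biases
x = rng.normal(size=N)           # one input pattern
print(committee_output(x, W, b))
```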

Deep learning from crowds

Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of...

A New Backpropagation Algorithm without Gradient Descent

The backpropagation algorithm, originally introduced in the 1970s, is the workhorse of learning in neural networks. It makes use of the well-known machine learning algorithm called gradient descent, a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, ...
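
As a minimal sketch of the gradient-descent update itself (my own illustration, not the new algorithm this article proposes), each step moves the parameter a small distance against the gradient:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeated update x <- x - lr * grad(x) moves x toward a local minimum."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Example: f(x) = (x - 3)^2 has gradient 2 * (x - 3) and its minimum at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # ~= 3.0
```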

Generation of Attributes for Learning Algorithms

Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as preproce...
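
The general idea of constructive induction as a learner-independent preprocessing step can be sketched as follows. This is not GALA itself; the conjunction features and function names are hypothetical, introduced only for illustration of building new attributes before any learner sees the data.

```python
from itertools import combinations

def add_conjunctions(examples):
    """Toy constructive-induction preprocessor: append the conjunction (AND)
    of every pair of boolean attributes as a new attribute."""
    augmented = []
    for attrs in examples:
        new = [a and b for a, b in combinations(attrs, 2)]
        augmented.append(list(attrs) + new)
    return augmented

# Original boolean examples; a downstream learner would then be trained on
# the augmented representation instead of the raw attributes.
data = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]
print(add_conjunctions(data))
```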

Statistical Factors in Behaviour Learning

Various researchers have looked at ways of using supervised learning (i.e., training) to obtain adaptive robotic behaviours, e.g., [1, 2]. However, the limitations of this approach are still unclear. This paper presents the results of an empirical study involving three behaviours and three, well-known learning algorithms. The results of the study suggest that ordinary supervised learning algorith...


Journal title:

Volume   Issue

Pages  -

Publication date: 1994